
    Multiway Cut, Pairwise Realizable Distributions, and Descending Thresholds

    We design new approximation algorithms for the Multiway Cut problem, improving the previously known factor of 1.32388 [Buchbinder et al., 2013]. We proceed in three steps. First, we analyze the rounding scheme of Buchbinder et al., 2013 and design a modification that improves the approximation to (3+sqrt(5))/4 (approximately 1.309017). We also present a tight example showing that this is the best approximation achievable with the types of cuts considered by Buchbinder et al., 2013: (1) partitioning by exponential clocks, and (2) single-coordinate cuts with equal thresholds. Then, we prove that this factor can be improved by introducing a new rounding scheme: (3) single-coordinate cuts with descending thresholds. By combining these three schemes, we design an algorithm that achieves a factor of (10 + 4 sqrt(3))/13 (approximately 1.30217). This is the best approximation factor that we are able to verify by hand. Finally, we show that by combining these three rounding schemes with the scheme of independent thresholds from Karger et al., 2004, the approximation factor can be further improved to 1.2965. This approximation factor has been verified only by computer.

    Comment: This is an updated version and is the full version of the STOC 2014 paper.
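
    As a hedged illustration of scheme (1), partitioning by exponential clocks, here is a minimal Python sketch. The simplex embedding x (one point per vertex from the standard CKR relaxation) is an assumed input, and the paper's actual algorithm combines this scheme with the threshold cuts (2) and (3):

        import random

        def exponential_clocks_rounding(x, k):
            # x maps each vertex u to a length-k point in the simplex, where
            # x[u][i] is the fractional assignment of u to terminal i (from
            # the CKR linear-programming relaxation, assumed given here).
            # Draw one independent Exp(1) clock per terminal.
            z = [random.expovariate(1.0) for _ in range(k)]
            assignment = {}
            for u, xu in x.items():
                # u joins the terminal whose clock, slowed by 1/x[u][i],
                # rings first; a zero coordinate can never win.
                assignment[u] = min(
                    range(k),
                    key=lambda i: z[i] / xu[i] if xu[i] > 0 else float("inf"),
                )
            return assignment

        # Corner vertices (the terminals) always keep their own label;
        # fractional vertices are assigned randomly via the clocks.
        x = {"t0": [1, 0, 0], "t1": [0, 1, 0], "t2": [0, 0, 1], "v": [0.6, 0.3, 0.1]}
        print(exponential_clocks_rounding(x, 3))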

    Predicting Multi-actor collaborations using Hypergraphs

    Social networks are now ubiquitous, and most of them contain interactions involving multiple actors (groups), such as author collaborations, teams, or emails within an organization. Hypergraphs are natural structures for capturing multi-actor interactions that conventional dyadic graphs fail to capture. In this work, we address the problem of predicting collaborations while modeling the collaboration network as a hypergraph, mapping the prediction of future multi-actor collaborations to the hyperedge prediction problem. Since higher-order edge prediction is an inherently hard problem, we restrict ourselves to the task of predicting edges (collaborations) that have already been observed in the past. We propose a novel use of hyperincidence temporal tensors to capture time-varying hypergraphs, and we provide a tensor-decomposition-based prediction algorithm. We quantitatively compare the performance of the hypergraph-based approach with the conventional dyadic-graph-based approach. Our hypothesis that hypergraphs preserve information that simple graphs destroy is corroborated by experiments on an author collaboration network from the DBLP dataset. Our results demonstrate the strength of the hypergraph-based approach in predicting higher-order collaborations (size > 4), which is very difficult with a dyadic-graph-based approach. Moreover, when predicting collaborations of size > 2, hypergraphs provide better results in most cases, with an average increase of approximately 45% in F-score across sizes {3, 4, 5, 6, 7}.
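
    The pipeline suggested by the abstract (a hyperincidence temporal tensor followed by a tensor decomposition) could be concretized roughly as below. This is a sketch under assumptions, not the paper's code: the author x hyperedge x window tensor layout, the CP rank, and the recurrence-scoring rule are all guesses, and tensorly's parafac stands in for whatever decomposition the authors use:

        import numpy as np
        import tensorly as tl
        from tensorly.decomposition import parafac

        # Toy hyperincidence temporal tensor: T[a, e, t] = 1 if author a took
        # part in previously observed collaboration (hyperedge) e in window t.
        rng = np.random.default_rng(0)
        n_authors, n_edges, n_windows = 20, 15, 6
        T = (rng.random((n_authors, n_edges, n_windows)) < 0.15).astype(float)

        # Low-rank CP (PARAFAC) decomposition smooths the observed incidences.
        cp = parafac(tl.tensor(T), rank=4)
        T_hat = tl.to_numpy(tl.cp_to_tensor(cp))

        # Score each known hyperedge by its reconstructed incidence mass in
        # the latest window; high-scoring hyperedges are predicted to recur.
        scores = T_hat[:, :, -1].sum(axis=0)
        print("top candidate hyperedges:", np.argsort(scores)[::-1][:5])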

    Sub-10nm Transistors for Low Power Computing: Tunnel FETs and Negative Capacitance FETs

    One of the major roadblocks in the continued scaling of standard CMOS technology is its alarmingly high leakage power consumption. Although circuit- and system-level methods can be employed to reduce power, the fundamental limit on the overall energy efficiency of a system is still rooted in the MOSFET operating principle: the injection of thermally distributed carriers, which does not allow a subthreshold swing (SS) lower than 60 mV/dec at room temperature. Recently, a new class of steep-slope devices, such as Tunnel FETs (TFETs) and Negative-Capacitance FETs (NCFETs), has garnered intense interest due to their ability to surpass the 60 mV/dec limit on SS at room temperature. The focus of this research is the simulation and design of TFETs and NCFETs for ultra-low-power logic and memory applications. Using a full-band quantum mechanical model within the Non-Equilibrium Green's Function (NEGF) formalism, source underlapping is proposed as an effective technique to lower the SS in GaSb-InAs TFETs. Band-tail states, associated with heavy source doping, are shown to significantly degrade the SS of TFETs from its ideal value. To solve this problem, an undoped-source GaSb-InAs TFET in an i-i-n configuration is proposed. A detailed circuit-to-system-level evaluation is performed to investigate the circuit-level metrics of the proposed devices. To demonstrate their potential in a memory application, a 4T gain cell (GC) is proposed that utilizes the low leakage and enhanced drain capacitance of TFETs to realize robust, long-retention-time GC embedded DRAMs. The device/circuit/system-level evaluation of the proposed TFETs demonstrates their potential for low-power digital applications.

    The second part of the thesis focuses on the design-space exploration of hysteresis-free Negative-Capacitance FETs (NCFETs). A cross-architecture analysis using HfZrOx ferroelectric (FE-HZO) integrated on bulk MOSFETs, fully depleted SOI FETs, and sub-10nm FinFETs shows that the FDSOI and FinFET configurations greatly benefit NCFET performance, because their undoped body and improved gate control enable better capacitance matching with the ferroelectric. A low-voltage NC-FinFET operating down to 0.25V is predicted using ultra-thin 3nm FE-HZO. Next, we propose a one-transistor ferroelectric NOR-type (Fe-NOR) non-volatile memory based on HfZrOx ferroelectric FETs (FeFETs). The enhanced drain-channel coupling in ultrashort-channel FeFETs is utilized to dynamically modulate the memory window of the storage cells, resulting in simple erase, program, and read operations. The simulation analysis predicts sub-1V program/erase voltages in the proposed Fe-NOR memory array, which therefore presents a significantly lower-power alternative to conventional FeRAM and NOR flash memories.
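
    For reference, the 60 mV/dec figure quoted above is the standard room-temperature 'Boltzmann' limit for thermionic injection, SS = (kT/q) ln(10); a quick numeric check (a textbook relation, not taken from the thesis):

        import math

        k = 1.380649e-23     # Boltzmann constant, J/K
        q = 1.602176634e-19  # elementary charge, C
        T = 300.0            # room temperature, K

        # Subthreshold swing limit for thermally injected carriers.
        ss = (k * T / q) * math.log(10)  # volts per decade of drain current
        print(f"SS limit at {T:.0f} K: {ss * 1e3:.1f} mV/dec")  # ~59.5 mV/dec

    TFETs evade this bound by injecting carriers through band-to-band tunneling rather than over a thermal barrier, while NCFETs amplify the gate's control of the surface potential with a ferroelectric layer.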

    Optimizing Software Quality through Automation Testing

    Current business applications are large, multi-tiered, distributed, and integrated, and require a high level of sophistication to implement and manage. Current quality methodologies rely on manual work, which makes applications vulnerable due to its limitations and entails higher cost. Running a complete regression suite manually every time is cumbersome, and runs often do not complete due to time or resource limitations. Finding more defects during the testing life cycle has a tremendous effect on the quality of an application. This project aims to run more tests in less time and to reduce the overall cost of the project, which has been achieved by implementing an automation tool. Various tools and frameworks are studied to fulfill this requirement, and the results are stated and compared. The implication of implementing an automated test tool is higher software quality assurance.
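
    The abstract names no specific tool, so the following pytest sketch is purely illustrative of the idea being argued: a parametrized automated suite reruns an entire regression table on every build, replacing repetitive manual checks. The function under test, apply_discount, is hypothetical:

        import pytest

        def apply_discount(price: float, percent: float) -> float:
            # Hypothetical business rule standing in for application logic.
            if not 0 <= percent <= 100:
                raise ValueError("percent out of range")
            return round(price * (1 - percent / 100), 2)

        # Each tuple is one regression case; the whole table runs unattended,
        # which is the repetitive work that is error-prone when done by hand.
        @pytest.mark.parametrize("price,percent,expected", [
            (100.0, 0, 100.0),
            (200.0, 25, 150.0),
            (100.0, 15, 85.0),
        ])
        def test_apply_discount(price, percent, expected):
            assert apply_discount(price, percent) == expected

        def test_rejects_out_of_range_percent():
            with pytest.raises(ValueError):
                apply_discount(100.0, 150)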

    Privacy-Preserving Public Information for Sequential Games

    In settings with incomplete information, players can find it difficult to coordinate on states with good social welfare. For example, in financial settings, if a collection of financial firms have limited information about each other's strategies, a large number of them may choose the same high-risk investment in hopes of high returns. While this might be acceptable in some cases, the economy can be hurt badly if many firms invest in the same risky market segment and it fails. One reason why many firms might end up choosing the same segment is that they do not have information about other firms' investments (imperfect information may lead to 'bad' game states). Directly reporting all players' investments, however, raises confidentiality concerns for both individuals and institutions. In this paper, we explore whether information about the game state can be publicly announced in a manner that maintains the privacy of the players' actions and still suffices to deter players from reaching bad game states. We show that in many games of interest, it is possible for players to avoid these bad states with the help of privacy-preserving, publicly announced information. We model the behavior of players in this imperfect-information setting in two ways, as greedy and as undominated strategic behavior, and we prove guarantees on the social welfare that certain kinds of privacy-preserving information can help attain. Furthermore, we design a counter with improved privacy guarantees under continual observation.
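
    The counter in the last sentence refers to differentially private counting under continual observation. The abstract does not describe the improved construction, so for intuition here is a sketch of the classic binary-tree counter (Dwork et al., 2010; Chan, Shi, and Song, 2011) that such results build on, not the paper's own scheme:

        import math
        import random

        def laplace(scale):
            # Lap(0, scale), sampled as the difference of two exponentials.
            return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

        def binary_mechanism(stream, epsilon):
            # Release a noisy running count after every bit of a 0/1 stream;
            # error grows only polylogarithmically in the stream length.
            T = len(stream)
            L = max(1, math.ceil(math.log2(T + 1)))  # tree levels ~ log2(T)
            scale = L / epsilon          # each item touches at most L p-sums
            alpha = [0.0] * L            # exact dyadic partial sums (p-sums)
            alpha_hat = [0.0] * L        # their noisy, released counterparts
            out = []
            for t in range(1, T + 1):
                i = (t & -t).bit_length() - 1              # lowest set bit of t
                alpha[i] = sum(alpha[:i]) + stream[t - 1]  # fold lower levels up
                for j in range(i):
                    alpha[j] = alpha_hat[j] = 0.0
                alpha_hat[i] = alpha[i] + laplace(scale)
                # Count at time t = noisy p-sums on the set bits of t.
                out.append(sum(alpha_hat[j] for j in range(L) if (t >> j) & 1))
            return out

        print([round(c, 1) for c in binary_mechanism([1, 0, 1, 1, 0, 1, 1, 0], 1.0)])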

    Observation Method: A Review Study

    The observation method is described as a method to observe and describe the behavior of a subject; it involves the basic technique of simply watching a phenomenon until some hunch or insight is gained. We are almost constantly engaged in observation: “It is our basic method of obtaining information about the world around us.” The human eye has been a basic tool of observation for a long time. Nowadays, a number of tools, such as cameras, video cameras, and tape recorders, are also being employed by researchers, who also utilize ‘laboratory conditions’ to study certain aspects. The term covers several types, techniques, and approaches, which may be difficult to compare in terms of enactment and anticipated results; the choice must be adapted to the research problem and the scientific context. As a matter of fact, observation may be regarded as the basis of everyday social life for most people; we are diligent observers of behavior and of our material surroundings. “We watch, evaluate, draw conclusions, and make comments on interactions and relations.” However, observation raised to the rank of a scientific method should be carried out systematically, purposefully, and on scientific grounds, even if curiosity and fascination may still be very important components of it. This paper discusses the meaning and purpose of the observation method of data collection. It also dwells on how to plan for observation and the different types of observation. The advantages and disadvantages are also stated.